Main Factors

The problem with creating stimuli

With free y-axis scaling, it’s (relatively) easy to distinguish the shapes of the various curves on a linear scale (the quartic is included because it exceeds the exponential over part of the domain). However, once the log scale is introduced, it’s hard to distinguish the shapes of the linear, quadratic, and quartic curves.
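There’s a simple algebraic reason for the log-scale confusability: on a log scale, the polynomial curves differ only by a vertical stretch, and free y-axis scaling absorbs exactly that. A minimal Python sketch (Python here is just a stand-in for the R behind the plots):

```python
import numpy as np

# On a log scale, log(x^4) = 2 * log(x^2): the quartic is a pure
# vertical stretch of the quadratic, which free y-axis scaling hides.
# A correlation of exactly 1 confirms the two differ only affinely.
x = np.linspace(1.0, 10.0, 200)
r = np.corrcoef(np.log(x ** 2), np.log(x ** 4))[0, 1]
print(round(r, 6))   # 1.0
```

Only the exponential breaks the pattern, since \(\log(e^x) = x\) is linear in x rather than in \(\log x\).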

Other issues…

If we instead hold the y scaling fixed and scale x, you can actually distinguish the curves on the log axis, but not on the linear axis. The cognitive load of reading the axes isn’t as overwhelming, though, because the scales line up vertically.

If we instead scale y manually, things get a little more distinguishable…

One solution

If we control the start and end points of each function, we can fit quadratic/quartic/linear/exponential functions to the points, solve for other parameters, and get reasonably flexible functions that have the same domain and range.
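As a sketch of what that fitting could look like (hypothetical parameterization in Python; the endpoint values below are made up for illustration), each family keeps one shape term and the remaining coefficients are solved from the shared endpoints:

```python
import numpy as np

def match_endpoints(x0, y0, x1, y1):
    """Return linear/quadratic/quartic/exponential curves that all pass
    through (x0, y0) and (x1, y1), so domain and range agree.
    Hypothetical parameterization: each family keeps one shape term and
    solves the remaining coefficient from the endpoint constraints."""
    t = lambda x: (x - x0) / (x1 - x0)   # rescale x onto [0, 1]
    dy = y1 - y0
    return {
        "linear":      lambda x: y0 + dy * t(x),
        "quadratic":   lambda x: y0 + dy * t(x) ** 2,
        "quartic":     lambda x: y0 + dy * t(x) ** 4,
        "exponential": lambda x: y0 * (y1 / y0) ** t(x),  # needs y0, y1 > 0
    }

curves = match_endpoints(1, 1, 10, 100)
for name, f in curves.items():
    print(name, f(1), f(10))   # every curve hits (1, 1) and (10, 100)
```

With shared endpoints, every panel can use identical axis limits, which is the point of the exercise.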

A better plan might be to just use a Taylor expansion, which saves us the trouble of fitting regressions…

The problem is that odd-order approximations go negative for x sufficiently far below the expansion center a. That’s not super optimal… also, because we’re not fitting both endpoints, the ranges don’t match that well.

We can expand at a point slightly outside the domain, which results in more reasonable looking plots, but we’re still right back at the range problem.
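A quick numeric check of the odd-order problem (Python sketch; the expansion point and order are arbitrary choices, not the ones used for the stimuli):

```python
import math

def taylor_exp(x, a, order):
    """Order-n Taylor polynomial of e^x expanded at x = a."""
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k)
               for k in range(order + 1))

# The cubic (odd-order) expansion of e^x at a = 2 dips below zero for
# x far enough left of the expansion center, while the quadratic
# (even-order) expansion stays positive there:
print(taylor_exp(-1.0, 2.0, 3) < 0)   # True
print(taylor_exp(-1.0, 2.0, 2) > 0)   # True
```

The leading term of an odd-order polynomial dominates as \(x \to -\infty\), so the dip below zero is unavoidable no matter where the expansion is centered; moving the center outside the domain only postpones it.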

Lineup Options

Quadratic vs. Cubic (Plot #1)

Distinguishing between polynomial effects:

Quartic vs. Exponential (Plot #1)

Quadratic vs. Exponential (Plot #1)

Random coefficients (slight variations)

Error structure is going to matter a ton, as is any scaling method.

Scaling w/ additive errors on log scale

Here, \(\epsilon \sim N(0, .25^2)\)
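In code, additive error on the log scale amounts to multiplicative lognormal error on the original scale. A hypothetical Python sketch of this error model (function name and example curve are illustrative, not the actual stimulus code):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_log_scale_noise(y, sd=0.25, rng=rng):
    """Additive N(0, sd^2) error on the log scale, i.e. multiplicative
    lognormal error on the original scale; assumes y > 0."""
    return np.exp(np.log(y) + rng.normal(0.0, sd, size=len(y)))

x = np.linspace(1.0, 5.0, 50)
noisy = add_log_scale_noise(np.exp(x))   # an example positive curve
print(bool(np.all(noisy > 0)))           # True: exp() never crosses zero
```

Because the noise enters inside the exponential, the result is positive by construction, so the log axis never breaks.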

Scaling w/ additive errors on linear scale

Here, \(\epsilon \sim N(1, .25^2)\); the error mean is shifted to 1 because we can’t show below-zero values on the log scale.
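A Python sketch of this error model (function name hypothetical): shifting the error mean to 1 keeps the noisy values positive with overwhelming probability, so they remain drawable on a log axis.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_linear_scale_noise(y, mean=1.0, sd=0.25, rng=rng):
    """Additive noise on the linear scale; the mean is shifted to 1 so
    that y + eps stays positive (with overwhelming probability, since a
    negative draw would need a 4-sigma event) and the result can still
    be drawn on a log axis."""
    return y + rng.normal(mean, sd, size=len(y))

y = np.linspace(0.0, 10.0, 50)   # a scaled, nonnegative example curve
noisy = add_linear_scale_noise(y)
print(round(float(noisy.mean()), 2))
```

The cost of the shift is a systematic +1 offset in every panel, which is harmless as long as all panels get the same treatment.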


Scaling w/ additive errors on log AND linear scale

This corresponds to data where there are both additive and multiplicative errors… Here, \(\epsilon_a \sim N(1, .125^2)\) because we can’t show below-zero values on the log scale, and \(\epsilon_b \sim N(0, .125^2)\) is the error on the log scale. (The SDs were reduced because using an SD of 0.25 for both errors overwhelmed any signal.)
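Combining the two, a hypothetical Python sketch of the additive-plus-multiplicative error model (names and example curve are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_both_noises(y, sd=0.125, rng=rng):
    """Additive N(0, sd^2) error on the log scale (multiplicative on the
    original scale) plus additive N(1, sd^2) error on the linear scale,
    matching eps_b and eps_a above; assumes y > 0."""
    eps_b = rng.normal(0.0, sd, size=len(y))   # log-scale error
    eps_a = rng.normal(1.0, sd, size=len(y))   # linear-scale error, mean 1
    return np.exp(np.log(y) + eps_b) + eps_a   # almost surely positive

y = np.exp(np.linspace(0.0, 2.0, 50))   # an example positive curve
noisy = add_both_noises(y)
```

With both errors at SD 0.125, the multiplicative component dominates where y is large and the additive component dominates where y is small, which is what makes the combined model behave differently on the two axes.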


Additional failed attempts…

Lesson: Order of operations matters a lot. Sigh.

Additive errors at original scale

Here, \(\epsilon \sim N(1, 0.25^2)\) because we need strictly positive values for the log transform to work.


That seems to rather negate the utility of the log scale (at least, at this scale… )
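The negation is easy to see numerically: an additive error with constant SD is relatively enormous at the bottom of a log axis and invisible at the top. A small Python illustration (example values, not the stimulus data):

```python
import numpy as np

# Error SD relative to the signal, for a constant additive error of
# SD 0.25 across a signal spanning four orders of magnitude: the
# relative noise ranges from 250% of the signal down to 0.25%.
y = np.array([0.1, 1.0, 10.0, 100.0])
rel_sd = 0.25 / y
print(rel_sd)
```

On a log axis, it is this relative error that determines the visual thickness of the noise band, so a constant additive error produces a band that pinches shut as y grows.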

If we instead just don’t scale things, and use \(\epsilon \sim N(0, .25^2)\), that doesn’t really work.


Multiplicative errors at original scale (Additive at log scale)

If we add \(\epsilon \sim N(0, \sigma^2)\) in before exponentiation, and then scale, the plots look like this:


But, then, our errors are not all on the same scale. That could introduce visual artifacts (though in this case it’s hard to really see a variability difference).

Alternately, we could add the errors post-scaling.

This requires us to scale the data, then log transform, then add error, then exponentiate again, because the scaling is essential to end up with comparable y axes. This leads to some interesting visual artifacts…
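The two orderings can be contrasted in a Python sketch (the min-max rescale below is a hypothetical stand-in for whatever scaling the actual stimuli use):

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax_scale(y, lo=1.0, hi=10.0):
    """Rescale y into [lo, hi]; lo > 0 so a log axis still works."""
    return lo + (hi - lo) * (y - y.min()) / (y.max() - y.min())

def noise_then_scale(y, sd=0.25, rng=rng):
    """Ordering (a): add log-scale error to the raw curve, exponentiate
    back, then rescale. The rescale shrinks or stretches the errors by
    a different factor for each curve, so error scales stop matching."""
    noisy = np.exp(np.log(y) + rng.normal(0.0, sd, size=len(y)))
    return minmax_scale(noisy)

def scale_then_noise(y, sd=0.25, rng=rng):
    """Ordering (b): rescale first, then log, add error, exponentiate.
    All curves share one error scale, at the cost of the visual
    artifacts noted above."""
    scaled = minmax_scale(y)
    return np.exp(np.log(scaled) + rng.normal(0.0, sd, size=len(y)))

x = np.linspace(1.0, 5.0, 100)
a = noise_then_scale(np.exp(x))
b = scale_then_noise(np.exp(x))
```

The orderings differ only in where the rescale sits relative to the noise, which is exactly the order-of-operations lesson above: (a) guarantees the final range, (b) guarantees comparable error scales, and neither gives you both.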
